Author
|
Topic: NAS 10 Years Later: What's changed?
|
Dan Mangan Member
|
posted 12-02-2012 10:21 AM
Given some of the tangents on recent threads, I thought it might be an opportune time to revisit the 2002 NAS report on polygraph.

BTW, I'm regularly surprised by the number of people in the polygraph field who are unfamiliar with the NAS report. True, the book itself can be very heavy reading, but the bottom lines are pretty straightforward. Here's a link to committee chair Stephen Fienberg's opening statement to the press, given at the 10/08/2002 news conference. The statement is a quick and easy read, and it sums things up -- from the committee's perspective -- rather tidily.

http://www.stat.cmu.edu/~fienberg/Polygraph_News/NAS-NRC-Release/NASopeningstatement-10-8-02.html

Here are a few snippets:

The science base is severely limited in three key respects. First, there is almost no evidence assessing accuracy in realistic security-screening situations. Second, we found no studies on testing alleged terrorists and spies. And third, there is very limited evidence on whether efforts to beat the tests, known as countermeasures, can deceive experienced examiners.

* * *

However, the research base shows only limited correspondence between deception and the physiological responses monitored by the polygraph. In particular, responses typically viewed as an indication of deception can have other causes, so polygraph testing is intrinsically susceptible to producing errors.

* * *

Even if the validity of the technique is weak, could it still have utility? Indirect evidence supports the idea that the polygraph may have utility for deterring violations and eliciting confessions, but this utility depends on examinees believing that the polygraph is accurate. In the long run, utility depends on validity.

* * *

It's been 10 years since the NAS report, and it appears that the same problems are unresolved. Is there any reason to believe things will be much different in another 10 years? Is the polygraph "maxed out"?
Here's the link again: www.stat.cmu.edu/~fienberg/Polygraph_News/NAS-NRC-Release/NASopeningstatement-10-8-02.html

Read (or review) Fienberg's comments and tell us what you think.

Dan

[This message has been edited by Dan Mangan (edited 12-02-2012).]
rnelson Member
|
posted 12-02-2012 12:32 PM
Dan and everyone:

quote: The science base is severely limited in three key respects. First, there is almost no evidence assessing accuracy in realistic security-screening situations. Second, we found no studies on testing alleged terrorists and spies. And third, there is very limited evidence on whether efforts to beat the tests, known as countermeasures, can deceive experienced examiners.

* * *

However, the research base shows only limited correspondence between deception and the physiological responses monitored by the polygraph. In particular, responses typically viewed as an indication of deception can have other causes, so polygraph testing is intrinsically susceptible to producing errors.

* * *

Even if the validity of the technique is weak, could it still have utility? Indirect evidence supports the idea that the polygraph may have utility for deterring violations and eliciting confessions, but this utility depends on examinees believing that the polygraph is accurate. In the long run, utility depends on validity.
There are a number of problems with these very general statements. They appear to be carefully crafted to make an impression on impressionable people who do not look beyond the simple hyperbole.

Start with the first paragraph:

1) I suspect there are studies on realistic screening scenarios that we don't have access to. Not completely satisfying, but I think it is a mistake to assume that no work has been done in this area. Screening studies were, and still are, an important deficiency in our knowledge-base. But we are at least beginning to define the mathematical and decision-theoretic issues, and we now have initial estimates of the confidence range for accuracy of exams that are interpreted with the assumption of independent criterion variance. Again, not a completely satisfying change, but we have done something really important in these 10 years. We will undoubtedly need to continue doing more.

2) Just exactly how would someone do a "study" on "alleged terrorists or spies"? It is an impossibility unless it's a case study. Thing is: case studies are anecdotes. They are good for hypothesizing and teaching and asking questions. Case studies cannot answer questions or validate anything. The NRC committee members know this. So you can decide for yourself why they might write like this.

3) There is limited information on countermeasures... a handful of studies. Limited? Yes. Useless? No. It is a very general statement intended to convey an impression more than facts.

-- more later --

Sure, the NRC report is important. It still is. Frank Horvath was very direct with us at APA 2011 and stated the meta-analysis was not necessary because we should use the NRC report. The thing is this: we thought it was necessary. We thought that the NRC report did not answer our questions. What we learned was that the NRC report answered some of our questions better than we may have realized at the time. It is important to read the NRC report, but it will also be important to not take it literally.
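The "confidence range for accuracy" idea mentioned above can be illustrated with a standard statistical tool. This is only a sketch with hypothetical numbers (the 85/100 figures are illustrative, not from any actual polygraph study): a Wilson score interval shows how wide the uncertainty around an observed accuracy rate really is for a modest sample size.

```python
import math

def wilson_interval(correct, n, z=1.96):
    """Wilson score confidence interval for a proportion (e.g., decision accuracy)."""
    p = correct / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical example: 85 correct decisions out of 100 scored exams.
lo, hi = wilson_interval(85, 100)
print(f"Observed accuracy 0.85, 95% CI roughly [{lo:.3f}, {hi:.3f}]")
```

The point is that a single accuracy percentage, quoted without its interval, hides a spread of plausible values of twelve points or more at this sample size.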
r ------------------ "Gentlemen, you can't fight in here. This is the war room." --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)
Dan Mangan Member
|
posted 12-02-2012 09:47 PM
quote: It is important to read the NRC report, but it will also be important to not take it literally.
Interesting. I'm sure this is what the NAS scientists who were on the polygraph committee say about the APA's meta-analysis...
rnelson Member
|
posted 12-03-2012 07:50 AM
Maybe they should...

By "literally" I mean that we should not accept their judgement, or the basis for their judgement, as a completely adequate summation or answer to questions of polygraph validity. No study is perfect, or complete. And everyone has a tendency to present information from their own limited perspective.

We are criticized routinely, and we even honor that criticism and take it literally, because "polygraph cannot measure lies" per se - meaning that there is no perfect and singular correlation between any particular physiological response and the act of deception. Well, all human physiological reactions are correlated with multiple phenomena. That makes for efficiency and versatility.

As I mentioned before, other tests also encounter this phenomenon, more or less. It turns out that pregnancy tests don't actually measure pregnancy. They measure the presence of a hormone that is present when someone is pregnant. But now we know that the hormone is not unique to pregnancy.

The NRC report is very useful. But it is not the final answer regarding polygraph validity, any more than the 1983 OTA report was.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room." --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)
Dan Mangan Member
|
posted 12-03-2012 08:40 AM
Ray,

The mystique of polygraphy extends well beyond the polygraph suite, it seems. Even the research, to quote Justice Thomas in Scheffer, "is fraught with uncertainties." It makes one wonder: Under whose stewardship, and through what machinations, could a "bulletproof" study be conducted? An institution such as the Massachusetts Institute of Technology comes to mind, but, if a school were to be used, perhaps it should be one geared toward psychology. Dunno.

Here's a personal anecdote... In 2007 I entertained the idea of getting a second master's degree, this time in psychology. I made an appointment with the chairman of the psych department at my alma mater, the University of Massachusetts/Lowell, to discuss the program. Because one of the school's master's degrees in psych was a "Psychology in the Community" kind of deal, I had an elaborate plan all mapped out. My focus would be PCSOT, and would tie sex offenders in with treatment in prison, therapeutic aftercare, parole/probation supervision, and community safety. This oughta be a slam-dunk, right?

The chairman was unfamiliar with PCSOT, so I walked him through it. When I concluded my explanations, and then expressed my vision of how I would like to approach the master's program, the dude looked at me as if I had two heads. He dismissed the notion altogether, saying something like, "The polygraph? No. That's not a direction this department wants to go in."

His comment floored me. I was stunned. And disgusted. At the time I was working for the NHDOC running PCSOT tests behind the walls, and saw how all the community dots connected. I left the meeting wondering why my proposal fell flat. Perhaps I should have followed up, or tried another school, but time passed and I didn't.

The reason I bring this up should be obvious. This particular chairperson, I believe, had a bias against polygraph. So selecting the venue and mechanisms for truly independent and disinterested polygraph studies may not be all that easy.
These things we know:

o The indu$try shouldn't do it.
o The gummint can't be trusted.
o Some colleges/universities might have their own bias.

What's left? The notion of a lone grad student living on a meager stipend from the APA doesn't inspire confidence, and it's tainted by the indu$try connection.

Do you -- or does anyone else -- have any other ideas on how independent studies could be run?

Dan

[This message has been edited by Dan Mangan (edited 12-03-2012).]
rnelson Member
|
posted 12-03-2012 10:39 AM
Dan,

Coupla things...

First: it is important to refrain from limiting our thinking to three-second sound-bites like "fraught with uncertainties." Harsh and impressive for sure, but all research is fraught with uncertainties. The sound-bite tells us nothing about what exactly those uncertainties are and how exactly those uncertainties do or do not undermine our confidence in the research results.

Second: we need to refrain from endorsing or legitimizing other professionals' misperceptions about the polygraph, including what it is, what it is not, and what it can and cannot do.

And finally: I think you missed my point about independence. I was being facetious in eliminating everyone except out-of-touch and impractical academics, and critics, from ever contributing meaningfully to our knowledge-base. To hold the idea that we have no obligation to study our own work is to encourage irresponsibility. Sure, we need independence, but that does not mean that nothing can be learned from studying things ourselves. For example: even tiny non-blind pilot studies can be helpful, useful and informative. They tell us about our initial baseline expectations, whether a larger study is likely to work, and what not to waste time with.

Keep in mind that there are degrees of independence. For example: I have no financial interest in the Backster techniques and do not use them, yet our tiny flawed little projects do seem to support the idea that the accuracy of the Backster technique may not be out of line with other techniques. It is important to recognize the issues and acknowledge the variables and confounds. Remember: all studies are confounded, and all theories are ultimately wrong (because they are incomplete and there is always something more to learn). If you insist that we can know nothing until we have the perfect study, then you will wait a long time.
Perfectionism is a form of black-and-white concrete thinking that interferes with our ability to actually learn and achieve anything.
For these reasons, I find your assumptions about what we know...

quote:
o The indu$try shouldn't do it.
o The gummint can't be trusted.
o Some colleges/universities might have their own bias.
to be troublesomely absolute and concrete. Governments evidently need accountability, just like financial investors (who make a living taking risks) need accountability, just like pharmaceutical companies need accountability. If we took the position that nobody does anything until someone comes along with the motivation and resources and skills to do completely independent, unbiased research, then we will wait a long, long time and the world will pass us by.

It is axiomatic that all research samples are biased, and therefore all research is biased. Biased simply means that the sample, and therefore the conclusions, are an imperfect representation of the entire population. So what we do is honor and admit our biases, name the confounds, and try to consider how this might affect the best-case or worst-case scenario. BTW, science is often all about the worst-case scenario.

-- more later -- gotta really bad cold now from all those friggin' airplanes.

r

------------------
"Gentlemen, you can't fight in here. This is the war room." --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)
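The sampling-bias point is easy to demonstrate with a toy simulation. Everything here is hypothetical (the 90%/60% detection rates and the subgroup mix are made-up numbers, not polygraph data): if exams are easier to score correctly for one kind of case, a sample that over-represents those cases will overstate overall accuracy.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical population: "easy" cases are scored correctly 90% of the
# time, "hard" cases only 60% of the time.
def run_exam(case_is_easy):
    p_correct = 0.90 if case_is_easy else 0.60
    return random.random() < p_correct

def estimate_accuracy(prop_easy, n=10_000):
    """Average accuracy over a sample containing a given share of easy cases."""
    hits = sum(run_exam(random.random() < prop_easy) for _ in range(n))
    return hits / n

# A sample over-representing easy cases (90%) vs. a balanced population (50%).
biased = estimate_accuracy(0.90)
balanced = estimate_accuracy(0.50)
print(f"biased sample: {biased:.2f}, balanced sample: {balanced:.2f}")
```

The biased sample lands several points above the balanced one, which is the sense in which a sample can be an "imperfect representation of the entire population" even when every individual exam is scored honestly.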